Just Because AI Can Do This Doesn’t Mean It Should

It’s hard to miss the trend: more and more companies are using the rapid advances in AI to deliver some of the most difficult news imaginable – job changes, reorganizations, even layoffs – with as little human interaction as possible. Reports this spring described employees at Oracle learning their roles had been eliminated via an early-morning email from leadership, followed by near-immediate system lockouts and a DocuSign packet.

As I hear stories like this, I don’t think about the amazing technological advances we’ve made; I think about the human experience of these choices. And I think about the leadership decisions that were made to implement those actions.  

How Sam found out their job was changing

Earlier this year I sat perched on the edge of a counter in a small conference room filled with more than double the number of people it was designed to accommodate. It was the end of a long day of training sessions. This meeting was the only thing separating us from a well-earned glass of wine and casual catching up. As change management consultants who crisscrossed the globe for our clients, we rarely got to see each other in person, so we were looking forward to this brief luxury of human interaction.

At the front of the room, one of our colleagues began sharing some recent output from a practice innovation session she’d attended with several other leaders about the future of change management in an AI world. They were trying to reimagine how change management methodologies and processes would evolve with the power of AI agents by our side. 

She enthusiastically shared a hypothetical near-future example of a worker being notified, with the help of AI agents, that their job would be changing. The example went something like this:

To support a multidimensional transformation at a large organization, a pod of AI agents was built to automate the communication workflow and personalize the experience for each impacted employee. Here’s what “Sam’s” experience could look like:

8:30 AM –   Sam arrives at their desk and begins work for the day

9:00 AM –   They receive a pop-up video message from a friendly-looking AI bot

“Hey Sam, we wanted to let you know your role is going to be shifting a bit. As you are aware, we’ve been automating some of the processes you do manually right now and we’re launching the automation soon. With that off your plate, there are some new, more strategic and engaging things we’d like you to take on. You’re going to receive additional information and training about that work today. Please be sure to prioritize the time to start learning and let your manager know if you have any questions.”

9:15 AM –   Sam then receives an email signed by their manager explaining their new responsibilities, with a link to their new job description.

9:30 AM –   Sam receives a notification that new training has been assigned to them. They also receive an invitation to a virtual team meeting at 11:00 from their manager.

10:00 AM – Sam logs in to the learning platform and begins their assigned training. A chat bot pops up on their desktop asking Sam to submit any questions they want to have answered at the team meeting.

11:00 AM – Sam attends the team meeting where their questions are proactively addressed by their manager.

As the example unfolded, I remember feeling surprised and increasingly uncomfortable. I looked around the packed room for any signs of discomfort from others and saw none.

At the end of her example, the presenter beamed with pride and asked, “What do you all think? Is this possible?”

I concentrated on holding my face still to hide my shock and looked around the room again. The crowd was smiling. A few enthusiastic thumbs-ups popped up, a couple light claps and plenty of nodding heads.

I was dumbfounded.

On one hand, I understand the temptation to applaud the coordination, planning and the attempt to personalize this experience. I know first-hand what it takes to execute something like this behind the scenes, and it’s no small feat, even with AI support.

However, what was glaringly missing for me was empathy.

This scenario is possible, yes, but why would we want to do that? What kind of experience is that for the employee? How would you feel if you found out about a change to your job in this unempathetic, automated way?

What the biology of change reveals

Beyond the psychological concern for the impacted employee, when we understand the biology behind change, it becomes clear that an approach like this can undermine the very outcomes the change is intended to achieve.

When Sam was told their job was changing, it’s highly likely that moment immediately triggered a significant biological and emotional response that derailed their productivity.

Humans are naturally wired to perceive change as a threat, and when threatened, our self-preservation instincts activate. This acute stress response, commonly referred to as ‘fight, flight or freeze’, is hardwired into our bodies. It’s incredibly helpful in situations where physical harm is possible, but far less helpful when the perceived threat is emotional.

When our amygdala – the emotional center of our brains – detects a threat, like a sudden loss of job responsibilities, it signals the hypothalamus to dump adrenaline and cortisol into our bloodstream to get the body ready to respond. This chemical surge heightens our senses, energy and physical capabilities, causing involuntary symptoms such as an increased heart rate, elevated blood pressure, shallow breathing, muscle tension, and extreme alertness.

At the same time, this response shuts down non-essential functions like digestion (that sudden “pit in your stomach”), our immune response and, most importantly, our prefrontal cortex. This is the area of the brain responsible for rational decision-making, memory and long-term planning. It’s also responsible for emotional regulation, which explains why, when we experience sudden change, we also experience heightened emotions like shock, fear, anger, panic, sadness or even excitement. In that moment, we are feeling the amygdala kick into overdrive.

To complicate matters, this dump of adrenaline takes an average of 30 minutes to wear off, assuming we are not further bombarded by threats during that time. In Sam’s case, they were.

When Sam suddenly learned at 9:00 AM that their job was changing, the stress response triggered and the internal 30-minute timer began. Fifteen minutes later, the email from their manager arrived, which, in their heightened state of alertness, likely triggered another stress response … 30 more minutes. Fifteen minutes after that, another notification arrived, and the cycle began again, each instance dumping more adrenaline into their bloodstream.

By the time Sam got to the 11:00 meeting, their brain and body would have been completely hijacked by stress, making it very difficult to listen to, be receptive to, let alone feel enthusiastic about, the benefits of this change.

AI automation makes the speed of this hypothetical scenario possible because the company does not have to rely on human time and capacity to meet with each person individually and explain the job change.

On paper, a company that implemented a process like this could log it as a win, saving hours of productivity. A process that used to take days – notifying a group of employees of changes to their jobs and deploying training – only took hours!

But the real issue in this scenario isn’t speed; it’s removing human accountability and ignoring both the emotional impact and hidden productivity loss of automating a moment of meaning.

Triggered by stress (and not knowing when or if another email might arrive in their inbox), Sam and the colleagues who received similar messages likely found it very difficult to be productive that day, and possibly for days afterward, undermining the very intention of the automation and impacting the company’s bottom line in unintended ways. Enthusiastic about the possibility of using AI to optimize for efficiency, leaders failed to interrogate the very real human costs.

But what’s ultimately at stake here isn’t efficiency, productivity loss or even the use of new technology; it’s the effects of the choices leaders make about how technology shows up in the moments that matter most. These are the moments that shape how people experience their organization: whether they feel respected or reduced, seen or processed.

A better way: EQ + AI

Decisions about when to automate and how to treat people in the process are not neutral; they quietly define expectations, signal priorities, and establish the boundaries of what is acceptable. Over time, those choices compound, becoming embedded in the very fabric of an organization’s culture and in both employee and customer expectations of how they will be treated.

As AI becomes increasingly capable, we must diligently consider how to set and hold the line between what is possible and what is the right thing to do; what AI can do versus what AI should do.

For example, here’s a more human-centric approach to the Sam scenario that still leverages AI as an accelerator:

8:30 AM –   Sam arrives at her desk and begins work for the day

9:00 AM –   She receives an invitation from her direct manager to a team meeting titled “Team Discussion – Please Prioritize”

10:00 AM – Sam attends the team meeting where she learns about changes to her role.

At the meeting, her manager shares: “I want to talk with you about some changes coming to your roles soon. As you are aware, we’ve been working on automating some of the processes we do manually, and we’re launching the automation soon. What that means is that some of the work you do will be shifting, not because we’re taking something away or because of any performance concerns, but because we want you to be able to focus on some new things that are going to be more impactful for the company’s future and for our customers. To set you up for success, you’re going to receive additional information and training about this new work starting today. Please be sure to prioritize the time to start learning.”

Sam’s manager gives everyone the opportunity to ask questions during the meeting and also lets them know that they can reach out with any questions that come up later.

11:00 AM – Sam receives a notification that new training has been assigned to her.

11:05 AM – Sam logs in to the learning platform and begins her assigned training. An AI chat bot pops up on her desktop offering to help curate her learning experience, reprioritize her time and answer any questions she has.

Instead of starting Sam’s morning with a surprise pop-up from a bot, the emotionally charged news came from a human in a way that signaled care and respect before anything else. Only after the human conversation did AI step in as a welcome support tool. AI increased clarity and follow-through, while a person carried the emotional weight of a pivotal moment.

This approach does not guarantee that Sam will embrace her changing role instantly, or that she and her colleagues will immediately get back to work with exceptional productivity. But it significantly increases the chances, and it provides a pathway to change where they don’t feel “processed.”

Using the ‘should we’ test

If we’re going to thrive together as humans and not let AI dictate our future, we need to find a way to balance AI capabilities with empathy, mindfulness and the ability to discern when AI is appropriate and when a human touch is necessary.

Here’s a simple “Should We” test leaders can use when making decisions about when to use AI. Ask yourself:

  1. Am I using AI to replace a sensitive or important human connection?

  2. What are the stakes and who is accountable if AI gets it wrong?

  3. Would I want to tell my external stakeholders that I used AI for that?

If you have doubts about your answers to these questions, forgo the AI or find different ways to leverage it.

The call to action for all leaders is to make sure the question “Is this possible with AI?” is always accompanied by “Should we do this with AI?”

It’s the decisions leaders make about how and when to use AI that will define the future of our work.

Meg Roman

Meg is a strategic change and communications leader with 20+ years advising the C-suite. She’s on a mission to help executives hone their competitive advantage, elevate their team's capabilities and thrive in the rapidly evolving age of AI. Meg is the founder and owner of Rome& (a strategic change and communications firm), a fractional Chief of Staff, a former industry exec and proud Big 4 escapee. When she’s not in the thick of complex change, she spends her time with her fur baby, traveling with the Team Roman Trio, playing mediocre tennis or doing anything related to wind, water or wine.

https://rome-and-consulting.com